Add payload threshold for using uTP, and show work #339
base: master
Conversation
(force-pushed from a76f7b7 to c1ef07f)
It is referenced from hive because I was writing tests which required it. Going forward I should update the tests to use MAX_PORTAL_CONTENT_PAYLOAD_SIZE, which is defined in ethereum/trin's ethportal-api, so it is always the right number in the future. For example, with the native Discv5 uTP proposal (instead of tunneling uTP through TalkReq), the max size would get bigger.
Anyway, I think this proposal to formalize this, so future implementers have an easier time, is a good thing.
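As a rough sketch of that idea (the import path and type below are assumptions based on this comment, not verified against trin), a hive test could pull the threshold from ethportal-api instead of hard-coding it:

```rust
// Assumed import: the comment above says MAX_PORTAL_CONTENT_PAYLOAD_SIZE is
// defined in ethereum/trin's `ethportal-api` crate; the exact module path and
// type are assumptions here.
use ethportal_api::MAX_PORTAL_CONTENT_PAYLOAD_SIZE;

/// Build a content payload one byte over the inline limit, so a test can
/// check that the client switches to uTP delivery for it.
fn oversized_content() -> Vec<u8> {
    vec![0xff; MAX_PORTAL_CONTENT_PAYLOAD_SIZE as usize + 1]
}
```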
(force-pushed from c1ef07f to 1af9e89)
Yeah, I nearly linked to it in these specs, but decided it was probably not best practice to imply that any particular implementation owned the canonical definition of the max payload size.
portal-wire-protocol.md (Outdated)
@@ -68,6 +68,39 @@ The transmission of data that is too large to fit a single packet is done using

> The Portal wire protocol currently implements uTP over the `TALKREQ/TALKRESP` messages. Future plans are to move to the [sub-protocol data transmission](https://github.com/ethereum/devp2p/issues/229) in order to use a protocol native mechanism for establishing packet streams between clients.

Currently, the standard is to switch to uTP when the payload exceeds 1165 bytes. This may change over time, because it depends on a number of other variables. See an example derivation in rust:
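(The rust derivation itself is not reproduced in this excerpt. The following is a minimal sketch of the shape it could take; only the 1280-byte discv5 packet limit and the resulting 1165-byte figure come from the discussion, and the intermediate overhead constant is an illustrative assumption back-solved from them.)

```rust
/// Maximum size of a discv5 UDP packet (from the discv5 spec).
const MAX_DISCV5_PACKET_SIZE: usize = 1280;

/// Overhead of the discv5 packet framing plus the TALKREQ/TALKRESP envelope
/// (masking IV, header, authdata, GCM tag, message type byte, RLP fields).
/// NOTE: this figure is back-solved to match the 1165-byte threshold above;
/// a real derivation would itemize each of those layers.
const TALK_REQ_OVERHEAD: usize = 113;

/// Portal wire overhead inside the talk payload: 1 byte message id plus
/// 1 byte SSZ Union selector (discussed further down in this thread).
const PORTAL_MESSAGE_OVERHEAD: usize = 2;

/// Payloads larger than this are sent over uTP rather than inline.
const MAX_PORTAL_CONTENT_PAYLOAD_SIZE: usize =
    MAX_DISCV5_PACKET_SIZE - TALK_REQ_OVERHEAD - PORTAL_MESSAGE_OVERHEAD;

fn main() {
    // Prints 1165 with the assumed overheads; the thread below debates
    // whether the correct figure is 1165 or 1175.
    println!("max inline content payload: {} bytes", MAX_PORTAL_CONTENT_PAYLOAD_SIZE);
}
```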
Great for adding this to the specifications, but I believe the value should be 1175 bytes.
That's the value that gets calculated by Fluffy at least, and IIRC this was at some point in the past also used as a reference by Trin, but perhaps it was not taken over exactly or got altered over time.
The calculation in Fluffy happens here: https://github.com/status-im/nimbus-eth1/blob/master/fluffy/network/wire/portal_protocol.nim#L123 & https://github.com/status-im/nimbus-eth1/blob/master/fluffy/network/wire/portal_protocol.nim#L404-L406
I don't have an exact test for this, but I do remember verifying this byte by byte on an encoded talkresp message.
I will make the consts more clear and can add a test for this as well, to show the derivation.
I have moved the calculation for the talkreq overhead to our discv5 module and added tests there: status-im/nim-eth@aa1e738
So I'm pretty certain that these are the correct values (at least with our discv5 implementation, but that shouldn't really differ between clients).
It looks like this test is for the max size of the Talk Response payload, but the relevant payloads for a Content message are transported in a Talk Request. Does that make a difference?
The content message adds just two additional bytes of overhead, so those still need to be added:
https://github.com/status-im/nimbus-eth1/blob/master/fluffy/network/wire/portal_protocol.nim#L391
You can also see that in our test vectors: https://github.com/ethereum/portal-network-specs/blob/master/portal-wire-test-vectors.md#content-response---content-payload
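For illustration, a minimal sketch of that framing follows; the message id and selector values are taken from the wire spec and test vectors as they stood at the time of this discussion, and should be checked against them:

```rust
/// Portal wire message id for the CONTENT response (0x05 at the time of
/// this discussion; check the wire spec / test vectors).
const CONTENT_MESSAGE_ID: u8 = 0x05;

/// SSZ Union selector for the `content` variant of the CONTENT payload
/// (0 = connection_id, 1 = content, 2 = enrs).
const CONTENT_SELECTOR: u8 = 0x01;

/// Frame raw content bytes as they would travel inside the talk payload:
/// 1 byte message id + 1 byte union selector + the payload itself.
fn frame_content_response(payload: &[u8]) -> Vec<u8> {
    let mut framed = Vec::with_capacity(2 + payload.len());
    framed.push(CONTENT_MESSAGE_ID);
    framed.push(CONTENT_SELECTOR);
    framed.extend_from_slice(payload);
    framed
}

fn main() {
    // The framed message is exactly 2 bytes longer than the raw payload.
    assert_eq!(frame_content_response(&[0xaa; 10]).len(), 12);
}
```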
Ok, I'll update the spec to note that we're talking about the size of the payload inside the message, and point out those extra 2 bytes.
I don't see a test in trin for this content size max, so I'll take your values here :)
Wait, but it looks like the message overhead is different for the different scenarios, like find nodes vs find content. So maybe I'll just ignore the 2 bytes here and look at the total encoded payload size.
Yes, it will be different for each of the wire protocol messages. But I guess the most important one here is the one that differentiates between sending content in the protocol response versus over uTP.
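In other words, the number that matters is the cut-over in the CONTENT response. A sketch of that decision, using the 1165-byte figure currently in the diff (Fluffy computes 1175, as discussed above):

```rust
/// Inline threshold for content in a CONTENT response; 1165 is the figure
/// currently in the diff, and is still under discussion in this thread.
const MAX_PORTAL_CONTENT_PAYLOAD_SIZE: usize = 1165;

enum ContentTransport {
    /// Content fits in the CONTENT response and is sent inline.
    Inline,
    /// Content is too large; send a connection_id and stream it over uTP.
    Utp,
}

fn choose_transport(content_len: usize) -> ContentTransport {
    if content_len <= MAX_PORTAL_CONTENT_PAYLOAD_SIZE {
        ContentTransport::Inline
    } else {
        ContentTransport::Utp
    }
}

fn main() {
    assert!(matches!(choose_transport(1165), ContentTransport::Inline));
    assert!(matches!(choose_transport(1166), ContentTransport::Utp));
}
```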
Overall, 👍 .
nitpick: The choice of rust for the code snippet seems a little counter-intuitive to me in terms of readability. Python or a similar loosely typed language that allows us to blindly deal with integers would probably have less extra notation.
# The maximum size of a portal CONTENT payload. At the time of writing, this payload either
# corresponds to a `connection_id`, `enrs`, or `content` payload.
Suggested change:
- # The maximum size of a portal CONTENT payload. At the time of writing, this payload either
- # corresponds to a `connection_id`, `enrs`, or `content` payload.
+ # The maximum size of a portal wire protocol message payload.
Considering the 2 bytes for the portal wire msg id and the Union selector (of the Content response) are ignored here, we should probably phrase it like the above suggestion? It would then apply to all message requests & responses.
We don't appear to explicitly state the threshold for switching from inline content to uTP delivery anywhere yet. It's referenced in hive, so we should presumably agree on the explicit limit: https://github.com/ethereum/hive/blob/8e3ffff6ea74d20fc073f9a78baa574e7e688856/simulators/portal/src/suites/history/interop.rs#L15